Derivation of Coordinate Descent Algorithms from Optimal Control Theory

Authors

Abstract

Recently, it was posited that disparate optimization algorithms may be coalesced in terms of a central source emanating from optimal control theory. Here, we further this proposition by showing how coordinate descent algorithms may be derived from this emerging new principle. In particular, we show that basic coordinate descent algorithms can be derived using a maximum principle and a collection of max functions as "control" Lyapunov functions. The convergence of the resulting coordinate descent algorithms is thus connected to the controlled dissipation of their corresponding Lyapunov functions. In all cases, the operational metric for the search vector is given by the Hessian of the convex objective function.
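As context for the abstract, a standard coordinate descent variant in which a max function selects the search coordinate is the Gauss-Southwell rule: at each step, minimize exactly along the coordinate with the largest gradient magnitude. The sketch below illustrates this on a convex quadratic; it is a generic textbook illustration, not the paper's optimal-control derivation, and the function name and problem data are chosen here for the example.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x for symmetric positive definite A.
# Gauss-Southwell rule: a max function over |gradient| entries selects
# the coordinate to update, then we minimize exactly along it.
def greedy_coordinate_descent(A, b, x0, iters=200):
    x = x0.astype(float).copy()
    for _ in range(iters):
        g = A @ x - b                   # gradient of f at x
        i = int(np.argmax(np.abs(g)))   # max-function coordinate selection
        x[i] -= g[i] / A[i, i]          # exact line minimization along e_i
    return x

A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = greedy_coordinate_descent(A, b, np.zeros(2))
# The minimizer solves A x = b, i.e. x = (0.2, 0.4).
```

Note that the curvature term A[i, i] used in the step is the i-th diagonal entry of the Hessian of f, echoing the abstract's remark that the Hessian supplies the operational metric for the search vector.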


Similar Articles

Coordinate descent algorithms

Coordinate descent algorithms solve optimization problems by successively performing approximate minimization along coordinate directions or coordinate hyperplanes. They have been used in applications for many years, and their popularity continues to grow because of their usefulness in data analysis, machine learning, and other areas of current interest. This paper describes the fundamentals of...


Coordinate Descent Algorithms for Phase Retrieval

Phase retrieval aims at recovering a complex-valued signal from magnitude-only measurements, which attracts much attention since it has numerous applications in many disciplines. However, phase recovery involves solving a system of quadratic equations, indicating that it is a challenging nonconvex optimization problem. To tackle phase retrieval in an effective and efficient manner, we apply coo...


Coordinate Descent Algorithms for Lasso Penalized Regression

Imposition of a lasso penalty shrinks parameter estimates toward zero and performs continuous model selection. Lasso penalized regression is capable of handling linear regression problems where the number of predictors far exceeds the number of cases. This paper tests two exceptionally fast algorithms for estimating regression coefficients with a lasso penalty. The previously known ℓ2 algorithm...
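The lasso coordinate update described above has a well-known closed form: each coefficient is obtained by soft-thresholding the correlation of its predictor with the partial residual. The following is a minimal sketch of that cyclic update for the objective 0.5·||y − Xβ||² + λ·||β||₁; the function names are illustrative, not from the cited paper.

```python
import numpy as np

def soft_threshold(z, t):
    # Proximal operator of the scaled absolute value t*|.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, iters=100):
    # Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)  # per-coordinate curvature ||X_j||^2
    for _ in range(iters):
        for j in range(p):
            # Partial residual with coordinate j removed from the fit
            r = y - X @ beta + X[:, j] * beta[j]
            beta[j] = soft_threshold(X[:, j] @ r, lam) / col_sq[j]
    return beta
```

For orthogonal predictors the update decouples: with X the identity, y = (3, −0.5) and λ = 1, a single sweep gives β = (2, 0), shrinking the large coefficient by λ and zeroing the small one, which is the continuous model selection behavior the abstract describes.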


Penalized Bregman Divergence Estimation via Coordinate Descent

Variable selection via penalized estimation is appealing for dimension reduction. For penalized linear regression, Efron, et al. (2004) introduced the LARS algorithm. Recently, the coordinate descent (CD) algorithm was developed by Friedman, et al. (2007) for penalized linear regression and penalized logistic regression and was shown to gain computational superiority. This paper explores...


Simultaneous shape and motion recovery: geometry, optimal estimation, and coordinate descent algorithms

With few exceptions, most previous approaches to the structure from motion problem have been based on a decoupling between shape and motion recovery, usually via discrete or differential versions of the epipolar constraint. This paper offers a differential geometric framework for the simultaneous shape and motion recovery problem. We first pose the simultaneous shape and motion recovery ...



Journal

Journal title: Operations Research Forum

Year: 2023

ISSN: 2662-2556

DOI: https://doi.org/10.1007/s43069-023-00215-6